


Creators/Authors contains: "Yang, Shuo"


  1. Abstract

    A local discontinuous Galerkin (LDG) method for approximating large deformations of prestrained plates is introduced and tested on several insightful numerical examples in Bonito et al. (2022, LDG approximation of large deformations of prestrained plates. J. Comput. Phys., 448, 110719). This paper presents a numerical analysis of this LDG method, focusing on the free boundary case. The problem consists of minimizing a fourth-order bending energy subject to a nonlinear and nonconvex metric constraint. The energy is discretized using LDG and a discrete gradient flow is used for computing discrete minimizers. We first show $\varGamma$-convergence of the discrete energy to the continuous one. Then we prove that the discrete gradient flow decreases the energy at each step and computes discrete minimizers with control of the metric constraint defect. We also present a numerical scheme for initialization of the gradient flow and discuss its conditional stability. (A toy illustration of such an energy-decreasing, constraint-controlled gradient flow appears after this list.)

     
  2. In label-noise learning, estimating the transition matrix is a central problem, as the matrix plays an important role in building statistically consistent classifiers. Traditionally, the transition from clean labels to noisy labels (i.e., the clean-label transition matrix (CLTM)) has been widely exploited to learn a clean-label classifier from noisy data. Motivated by the fact that classifiers mostly output Bayes optimal labels for prediction, in this paper we directly model the transition from Bayes optimal labels to noisy labels (i.e., the Bayes-label transition matrix (BLTM)) and learn a classifier to predict Bayes optimal labels. Note that given only noisy data, it is ill-posed to estimate either the CLTM or the BLTM. But favorably, Bayes optimal labels have less uncertainty than clean labels: the class posteriors of Bayes optimal labels are one-hot vectors, while those of clean labels are not. This yields two advantages for estimating the BLTM: (a) a set of examples with theoretically guaranteed Bayes optimal labels can be collected out of the noisy data; (b) the feasible solution space is much smaller. Exploiting these advantages, we estimate the BLTM parametrically with a deep neural network, leading to better generalization and superior classification performance. (A minimal sketch of this parametric estimation appears after this list.)
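A toy Python illustration of the gradient-flow idea in the first abstract, under simplifying assumptions (this is not the paper's LDG discretization): a 1D analogue of the plate problem, a discrete elastica whose bending energy is minimized subject to the metric-type constraint |y'| = 1, relaxed here by a penalty. Backtracking on the step size guarantees that the energy decreases at every step, mirroring the property proved in the paper, and the metric defect is reported at the end. The grid size, penalty weight, and finite-difference gradient are all expository choices.

```python
import numpy as np

n, h = 40, 1.0 / 39      # grid size and mesh width (illustrative)
penalty = 50.0           # weight of the relaxed metric constraint

def energy(flat):
    """Discrete bending energy plus a penalty on the metric defect |y'|^2 - 1."""
    y = flat.reshape(n, 2)
    d1 = (y[1:] - y[:-1]) / h            # discrete first derivative
    d2 = (d1[1:] - d1[:-1]) / h          # discrete second derivative
    bend = 0.5 * h * np.sum(d2 ** 2)
    defect = np.sum(d1 ** 2, axis=1) - 1.0
    return bend + 0.5 * penalty * h * np.sum(defect ** 2)

def grad(flat, eps=1e-6):
    """Central finite-difference gradient; fine for this toy's 2n unknowns."""
    g = np.zeros_like(flat)
    for i in range(flat.size):
        e = np.zeros_like(flat)
        e[i] = eps
        g[i] = (energy(flat + e) - energy(flat - e)) / (2 * eps)
    return g

# initial guess: a gently bent arc (standing in for an initialization scheme)
t = np.linspace(0, 1, n)
y = np.stack([t, 0.1 * np.sin(np.pi * t)], axis=1).ravel()

tau = 1e-3
for step in range(200):
    g = grad(y)
    e0 = energy(y)
    while energy(y - tau * g) > e0:      # backtracking: energy decreases each step
        tau *= 0.5
    y = y - tau * g

d1 = (y.reshape(n, 2)[1:] - y.reshape(n, 2)[:-1]) / h
print("final energy:", energy(y))
print("max metric defect:", np.abs(np.sum(d1 ** 2, axis=1) - 1.0).max())
```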
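And a minimal, hypothetical PyTorch sketch of the BLTM estimation described in the second abstract (not the authors' implementation): a small network maps each instance x to a row-stochastic matrix T(x); the row indexed by the instance's distilled Bayes optimal label is then fit to the observed noisy label by maximum likelihood. The feature dimension, the linear parametrization, and the synthetic batch standing in for distilled examples are all placeholder assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

num_classes, feat_dim = 10, 64  # placeholder sizes

class TransitionNet(nn.Module):
    """Maps features x to a row-stochastic T(x): row i is P(noisy | Bayes = i, x)."""
    def __init__(self, feat_dim, num_classes):
        super().__init__()
        self.num_classes = num_classes
        self.fc = nn.Linear(feat_dim, num_classes * num_classes)

    def forward(self, x):
        logits = self.fc(x).view(-1, self.num_classes, self.num_classes)
        return F.softmax(logits, dim=2)  # normalize each row

net = TransitionNet(feat_dim, num_classes)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

# Synthetic stand-in for a distilled batch: features, Bayes optimal labels
# harvested from confident predictions, and the observed noisy labels.
x = torch.randn(128, feat_dim)
bayes_y = torch.randint(0, num_classes, (128,))
noisy_y = torch.randint(0, num_classes, (128,))

for _ in range(100):
    T = net(x)                                    # (batch, C, C)
    # Bayes posterior is one-hot, so P(noisy | x) is simply row bayes_y of T(x)
    p_noisy = T[torch.arange(x.size(0)), bayes_y]
    loss = F.nll_loss(torch.log(p_noisy + 1e-8), noisy_y)
    opt.zero_grad()
    loss.backward()
    opt.step()

print("final NLL on the distilled batch:", loss.item())
```

The one-hot Bayes posterior is what makes the row-selection step well defined; with soft clean-label posteriors, the same likelihood would mix rows of T(x), which is one way to see why the BLTM's feasible solution space is smaller.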